15 research outputs found

    Optimal Guidance and Control for Electromagnetic Formation Flying

    Degree type: Master's thesis, University of Tokyo (東京大学)

    Real-time system identification using deep learning for linear processes with application to unmanned aerial vehicles

    This paper proposes a novel parametric identification approach for linear systems using Deep Learning (DL) and the Modified Relay Feedback Test (MRFT). The proposed methodology uses MRFT to reveal distinguishing frequencies of an unknown process, which are then passed to a trained DL model to identify the underlying process parameters. The presented approach guarantees stability in the identification phase and performance in the control phase, and requires only a few seconds of observation data to infer the dynamic system parameters. Quadrotor Unmanned Aerial Vehicle (UAV) attitude and altitude dynamics were used in simulation and experimentation to verify the presented methodology. Results show the effectiveness and real-time capability of the proposed approach, which outperforms the conventional Prediction Error Method in terms of accuracy, robustness to biases, computational efficiency, and data requirements. Comment: 13 pages, 9 figures. Submitted to IEEE Access. A supplementary video for the work presented in this paper can be accessed at: https://www.youtube.com/watch?v=dz3WTFU7W7c. This version includes minor style edits to the appendix and references.
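
    The relay-feedback stage can be illustrated with a minimal simulation. This sketch assumes a first-order process K/(τs + 1) under a hysteretic relay (the actual MRFT and the UAV dynamics in the paper differ); the limit-cycle period it extracts is the kind of distinguishing frequency a trained identifier would consume:

```python
import numpy as np

def relay_feedback_period(K=2.0, tau=1.0, h=1.0, eps=0.2, dt=1e-4, T=10.0):
    """Simulate a first-order lag  tau*dy/dt = K*u - y  under a hysteretic
    relay of amplitude h and band +/-eps, and estimate the limit-cycle
    period from the relay switching instants."""
    y, u, t = 0.0, h, 0.0
    switches = []
    for _ in range(int(T / dt)):
        # hysteretic relay: switch down when y exceeds +eps, up below -eps
        if u > 0 and y > eps:
            u = -h
            switches.append(t)
        elif u < 0 and y < -eps:
            u = h
            switches.append(t)
        y += dt * (K * u - y) / tau   # forward-Euler integration step
        t += dt
    # one period = two half-swings; average the tail to skip the transient
    gaps = np.diff(switches[-10:])
    return 2.0 * float(np.mean(gaps))
```

    For this process the period is known in closed form, 2·τ·ln((Kh+ε)/(Kh−ε)), which makes the sketch easy to sanity-check.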

    Neuromorphic Camera Denoising using Graph Neural Network-driven Transformers

    Neuromorphic vision is a bio-inspired technology that has triggered a paradigm shift in the computer-vision community and is serving as a key enabler for a multitude of applications. This technology has offered significant advantages, including reduced power consumption, reduced processing needs, and communication speed-ups. However, neuromorphic cameras suffer from significant amounts of measurement noise, which deteriorates the performance of neuromorphic event-based perception and navigation algorithms. In this paper, we propose a novel noise-filtration algorithm to eliminate events that do not represent real log-intensity variations in the observed scene. We employ a Graph Neural Network (GNN)-driven transformer algorithm, called GNN-Transformer, to classify every active event pixel in the raw stream as either real log-intensity variation or noise. Within the GNN, a message-passing framework, called EventConv, is carried out to reflect the spatiotemporal correlation among the events while preserving their asynchronous nature. We also introduce the Known-object Ground-Truth Labeling (KoGTL) approach for generating approximate ground-truth labels of event streams under various illumination conditions. KoGTL is used to generate labeled datasets from experiments recorded in challenging lighting conditions. These datasets are used to train and extensively test our proposed algorithm. When tested on unseen datasets, the proposed algorithm outperforms existing methods by 8.8% in terms of filtration accuracy. Additional tests are also conducted on publicly available datasets to demonstrate the generalization capabilities of the proposed algorithm in the presence of illumination variations and different motion dynamics. Compared to existing solutions, qualitative results verified the superior capability of the proposed algorithm to eliminate noise while preserving meaningful scene events.
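
    For intuition about what such a denoiser replaces, here is a classic spatiotemporal-correlation baseline (not the paper's GNN-Transformer): an event is kept only if a neighbouring pixel fired recently. Sensor size, neighbourhood radius, and the time window below are assumed illustrative values:

```python
import numpy as np

def baseline_spatiotemporal_filter(events, dt_max=10_000, r=1, H=64, W=64):
    """Background-activity filter: keep an event (x, y, t) only if some
    earlier event fired within a (2r+1)x(2r+1) neighbourhood during the
    last dt_max microseconds. Timestamps must be ascending."""
    last_t = np.full((H, W), -np.inf)   # most recent event time per pixel
    keep = []
    for x, y, t in events:
        x, y = int(x), int(y)
        patch = last_t[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        # any finite neighbour timestamp within the window supports the event
        keep.append(bool((t - patch <= dt_max).any()))
        last_t[y, x] = t
    return np.array(keep)
```

    Isolated noise events have no recent neighbours and are rejected; correlated scene events pass.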

    High Speed Neuromorphic Vision-Based Inspection of Countersinks in Automated Manufacturing Processes

    Countersink inspection is crucial in various automated assembly lines, especially in the aerospace and automotive sectors. Advancements in machine vision introduced automated robotic inspection of countersinks using laser scanners and monocular cameras. Nevertheless, these sensing pipelines require the robot to pause on each hole for inspection, owing to high latency and measurement uncertainty under motion, leading to prolonged execution times for the inspection task. The neuromorphic vision sensor, on the other hand, has the potential to expedite the countersink inspection process, but the unorthodox output of neuromorphic technology prohibits the use of traditional image-processing techniques; novel event-based perception algorithms are therefore needed. We propose a countersink detection approach based on event-based motion compensation and the mean-shift clustering principle. In addition, our framework presents a robust event-based circle detection algorithm to precisely estimate the depth of the countersink specimens. The proposed approach expedites the inspection process by a factor of 10× compared to conventional countersink inspection methods. The work in this paper was validated over 50 trials on three countersink workpiece variants. The experimental results show that our method provides a precision of 0.025 mm for countersink depth inspection despite the low resolution of commercially available neuromorphic cameras. Comment: 14 pages, 11 figures, 7 tables, submitted to Journal of Intelligent Manufacturing.
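
    The clustering step can be sketched in isolation. After motion compensation, events from each hole form a dense 2-D blob, and flat-kernel mean-shift drives every point toward its blob's centre; the bandwidth and the toy data below are illustrative, not the paper's:

```python
import numpy as np

def mean_shift(points, bandwidth=2.0, iters=30):
    """Minimal flat-kernel mean-shift mode seeking: each point is moved
    repeatedly to the mean of all points within `bandwidth`, so points
    belonging to the same hole converge to that hole's centre."""
    modes = points.astype(float).copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            d = np.linalg.norm(points - m, axis=1)
            near = d < bandwidth            # flat (uniform) kernel window
            modes[i] = points[near].mean(axis=0)
    return modes
```

    Points that converge to (numerically) the same mode are then reported as one detected hole centre.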

    Neuromorphic eye-in-hand visual servoing

    Robotic vision plays a major role in applications ranging from factory automation to service robots. However, traditional frame-based cameras limit continuous visual feedback because of their low sampling rate and the redundant data they produce for real-time image processing, especially in high-speed tasks. Event cameras offer human-like vision capabilities, such as observing dynamic changes asynchronously at high temporal resolution (1 μs) with low latency and a wide dynamic range. In this paper, we present a visual servoing method that uses an event camera and a switching control strategy to explore, reach, and grasp in a manipulation task. We devise three surface layers of active events to directly process the stream of events generated by relative motion. A purely event-based approach is adopted to extract corner features, localize them robustly using heat maps, and generate virtual features for tracking and alignment. Based on the visual feedback, the motion of the robot is controlled so that upcoming event features converge to the desired events in spatio-temporal space. The controller switches its strategy according to the sequence of operations to establish a stable grasp. The event-based visual servoing (EBVS) method is validated experimentally using a commercial robot manipulator in an eye-in-hand configuration. Experiments prove the effectiveness of the EBVS method in tracking and grasping objects of different shapes without the need for re-tuning. Comment: 8 pages, 10 figures.
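
    A minimal sketch of one "surface of active events" (time-surface) layer, assuming a small sensor resolution and an exponential decay constant; the paper's three-layer construction and corner extraction are not reproduced here:

```python
import numpy as np

def update_time_surface(events, H=32, W=32, tau=50_000.0):
    """Build a Surface of Active Events: store the most recent timestamp
    per pixel, then decay it exponentially relative to the latest event,
    yielding a 'heat map' of recent activity that feature extraction can
    operate on. Events are (x, y, t) with t in microseconds, t > 0."""
    sae = np.zeros((H, W))
    for x, y, t in events:
        sae[int(y), int(x)] = t          # keep only the newest event time
    t_now = events[-1][2]
    # pixels never hit (timestamp 0) are masked out of the surface
    return np.exp(-(t_now - sae) / tau) * (sae > 0)
```

    The most recently active pixel has value 1, older activity fades toward 0, so corner detectors see a smooth, motion-ordered intensity landscape.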

    Effects of hospital facilities on patient outcomes after cancer surgery: an international, prospective, observational study

    Background: Early death after cancer surgery is higher in low-income and middle-income countries (LMICs) than in high-income countries, yet the impact of facility characteristics on early postoperative outcomes is unknown. The aim of this study was to examine the association between hospital infrastructure, resource availability, and processes and early outcomes after cancer surgery worldwide.
    Methods: A multimethods analysis was performed as part of the GlobalSurg 3 study, a multicentre, international, prospective cohort study of patients who had surgery for breast, colorectal, or gastric cancer. The primary outcomes were 30-day mortality and 30-day major complication rates. Potentially beneficial hospital facilities were identified by variable selection to select those associated with 30-day mortality. Adjusted outcomes were determined using generalised estimating equations to account for patient characteristics and country income group, with population stratification by hospital.
    Findings: Between April 1, 2018, and April 23, 2019, facility-level data were collected for 9685 patients across 238 hospitals in 66 countries (91 hospitals in 20 high-income countries; 57 hospitals in 19 upper-middle-income countries; and 90 hospitals in 27 low-income to lower-middle-income countries). The availability of five hospital facilities was inversely associated with mortality: ultrasound, CT scanner, critical care unit, opioid analgesia, and oncologist. After adjustment for case-mix and country income group, hospitals with three or fewer of these facilities (62 hospitals, 1294 patients) had higher mortality compared with those with four or five (adjusted odds ratio [OR] 3.85 [95% CI 2.58-5.75]; p<0.0001), with excess mortality predominantly explained by a limited capacity to rescue following the development of major complications (63.0% vs 82.7%; OR 0.35 [0.23-0.53]; p<0.0001). Across LMICs, improvements in hospital facilities would prevent one to three deaths for every 100 patients undergoing surgery for cancer.
    Interpretation: Hospitals with higher levels of infrastructure and resources have better outcomes after cancer surgery, independent of country income. Without urgent strengthening of hospital infrastructure and resources, the reductions in cancer-associated mortality associated with improved access will not be realised.

    Optimal Trajectory Generation for Electromagnetic Formation Flying

    No full text

    Elastomer-Based Visuotactile Sensor for Normality of Robotic Manufacturing Systems

    No full text
    Modern aircraft require the assembly of thousands of components with high accuracy and reliability. The normality of drilled holes is a critical geometrical tolerance that must be achieved to realize an efficient assembly process; failure to achieve it leads to structures prone to fatigue problems and assembly errors. Elastomer-based tactile sensors have been used to help robots acquire useful physical-interaction information from their environments. However, current tactile sensors have not yet been developed to support robotic machining to the tight tolerances of aerospace structures. In this paper, a novel elastomer-based tactile sensor was developed for cobot machining. Three commercial silicone-based elastomer materials were characterised by mechanical testing in order to select the material with the best deformability. A finite-element model was developed to simulate the deformation of the tactile sensor upon contact with surfaces of different normality. Additive manufacturing was employed to fabricate the tactile sensor mould, which was chemically etched to improve its surface quality. The tactile sensor was obtained by directly casting and curing the optimum elastomer material onto the additively manufactured mould. A machine learning approach was trained on the simulated and experimental data obtained from the sensor. The capability of the developed visuotactile sensor was evaluated in real-world experiments with various inclination angles, achieving a mean perpendicularity tolerance of 0.34°. The developed sensor opens a new perspective on low-cost precision cobot machining.
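
    The learning step can be caricatured with ordinary least squares, assuming a single hypothetical deformation feature (a left/right contact-area asymmetry that grows linearly with tilt); the paper's actual image features and model are not specified here:

```python
import numpy as np

def fit_angle_regressor(features, angles):
    """Least-squares sketch of the learning step: fit a linear map from a
    scalar deformation feature (hypothetical contact-area asymmetry) to
    the surface tilt angle in degrees."""
    X = np.column_stack([features, np.ones(len(features))])  # add bias term
    w, *_ = np.linalg.lstsq(X, angles, rcond=None)
    return w

# synthetic training data: asymmetry assumed to grow linearly with tilt
angles = np.linspace(0.0, 5.0, 20)        # tilt angles in degrees
features = 0.8 * angles + 0.1             # hypothetical asymmetry feature
w = fit_angle_regressor(features, angles)

# predict the tilt for an unseen surface inclined at 3 degrees
f_new = 0.8 * 3.0 + 0.1
pred = w[0] * f_new + w[1]
```

    In practice the paper maps high-dimensional visuotactile images to angle, but the train-then-regress structure is the same.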